Universal Function Approximation by Deep Neural Nets with Bounded Width and ReLU Activations

Author

  • Boris Hanin
Abstract

This article concerns the expressive power of depth in neural nets with ReLU activations and bounded width. We are particularly interested in the following questions: what is the minimal width w_min(d) so that ReLU nets of width w_min(d) (and arbitrary depth) can approximate any continuous function on the unit cube [0,1]^d arbitrarily well? For ReLU nets near this minimal width, what can one say about the depth necessary to approximate a given function? We obtain an essentially complete answer to these questions for convex functions. Our approach is based on the observation that, due to the convexity of the ReLU activation, ReLU nets are particularly well-suited for representing convex functions. In particular, we prove that ReLU nets with width d + 1 can approximate any continuous convex function of d variables arbitrarily well. Moreover, when approximating convex, piecewise affine functions by such nets, we obtain matching upper and lower bounds on the required depth, proving that our construction is essentially optimal. These results then give quantitative depth estimates for the rate of approximation of any continuous scalar function on the d-dimensional cube [0,1]^d by ReLU nets with width d + 3.
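To illustrate the central identity behind such constructions (any convex piecewise affine function is a maximum of affine functions, and max(m, a) = a + ReLU(m - a)), here is a minimal NumPy sketch of a width-(d + 1) ReLU net that computes a running maximum of affine pieces, one piece per layer. The target function and all names are illustrative choices of mine, not the paper's exact construction:

    import numpy as np

    def relu(z):
        return np.maximum(z, 0.0)

    # Illustrative convex piecewise affine target on [0,1]^2 (my choice):
    # f(x) = max_i (W[i] @ x + b[i]).
    W = np.array([[ 1.0, 1.0],
                  [-1.0, 0.0],
                  [ 0.0, 0.5]])
    b = np.array([-1.0, 0.0, 0.0])

    def narrow_relu_net(x):
        """Width-(d+1) ReLU net computing max_i (W[i] @ x + b[i]) on [0,1]^d.

        Each hidden layer has d + 1 units: d units copy x through (ReLU is
        the identity on nonnegative inputs), and one unit holds
        h = relu(m - a_i(x)), from which the next affine map rebuilds the
        running maximum via max(m, a_i) = a_i + relu(m - a_i).
        """
        h, prev = 0.0, 0                      # running max m_0 = a_0(x) + h
        for i in range(1, len(b)):
            x = relu(x)                       # identity on the unit cube
            # pre-activation m - a_i(x) is affine in the activations (x, h)
            h = relu((W[prev] - W[i]) @ x + (b[prev] - b[i]) + h)
            prev = i
        return W[prev] @ x + b[prev] + h      # final affine output layer

    x = np.array([0.3, 0.8])
    print(narrow_relu_net(x), np.max(W @ x + b))   # both print 0.4

In this sketch the depth grows linearly with the number of affine pieces, which is the regime the paper's matching upper and lower depth bounds address.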


Similar Articles

Approximating Continuous Functions by ReLU Nets of Minimal Width

This article concerns the expressive power of depth in deep feed-forward neural nets with ReLU activations. Specifically, we answer the following question: for a fixed d ≥ 1, what is the minimal width w so that neural nets with ReLU activations, input dimension d, hidden layer widths at most w, and arbitrary depth can approximate any continuous function of d variables arbitrarily well? It turns...

Full text

The Expressive Power of Neural Networks: A View from the Width

The expressive power of neural networks is important for understanding deep learning. Most existing works consider this problem from the view of the depth of a network. In this paper, we study how width affects the expressiveness of neural networks. Classical results state that depth-bounded (e.g. depth-2) networks with suitable activation functions are universal approximators. We show a univer...

Full text

Deep Semi-Random Features for Nonlinear Function Approximation

We propose semi-random features for nonlinear function approximation. The flexibility of semi-random features lies between the fully adjustable units in deep learning and the random features used in kernel methods. For one hidden layer models with semi-random features, we prove with no unrealistic assumptions that the model classes contain an arbitrarily good function as the width increases (univ...

Full text
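As a rough reading of that idea, the sketch below gates a trainable linear map with a fixed random projection, so the unit is neither fully trainable nor purely random. All names are mine and the exact parameterization is a guess, not the paper's definition:

    import numpy as np

    rng = np.random.default_rng(0)

    def semi_random_layer(x, R, W):
        # R is a frozen random projection (never trained); W is the
        # trainable linear part. The random sign pattern of x @ R gates
        # the trainable response, elementwise.
        gate = (x @ R > 0).astype(x.dtype)
        return gate * (x @ W)

    # usage: 4 inputs, 8 semi-random units, batch of 10
    x = rng.random((10, 4))
    R = rng.standard_normal((4, 8))          # fixed at initialization
    W = rng.standard_normal((4, 8)) * 0.1    # learned in practice
    print(semi_random_layer(x, R, W).shape)  # (10, 8)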

Neural Network with Unbounded Activations is Universal Approximator

This paper presents an investigation of the approximation property of neural networks with unbounded activation functions, such as the rectified linear unit (ReLU), which is the new de facto standard of deep learning. The ReLU network can be analyzed by the ridgelet transform with respect to Lizorkin distributions. By showing three reconstruction formulas by using the Fourier slice the...

Full text

Understanding Deep Neural Networks with Rectified Linear Units

In this paper we investigate the family of functions representable by deep neural networks (DNN) with rectified linear units (ReLU). We give an algorithm to train a ReLU DNN with one hidden layer to global optimality with runtime polynomial in the data size albeit exponential in the input dimension. Further, we improve on the known lower bounds on size (from exponential to super exponential) fo...

Full text
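For intuition about why size lower bounds of this kind can be exponential in the depth, a classic construction (Telgarsky's triangle-wave example, offered here as background rather than as this paper's argument) composes a width-2 ReLU "tent map" with itself: depth k yields a sawtooth with 2^k linear pieces, which a shallow net can only match with exponentially many units:

    import numpy as np

    def relu(z):
        return np.maximum(z, 0.0)

    def tent(x):
        # one width-2 ReLU layer: t(x) = 2*relu(x) - 4*relu(x - 0.5)
        # maps [0, 0.5] up to [0, 1] and [0.5, 1] back down to [1, 0]
        return 2 * relu(x) - 4 * relu(x - 0.5)

    x = np.linspace(0.0, 1.0, 9)
    y = x
    for _ in range(3):        # compose 3 layers -> 2^3 = 8 linear pieces
        y = tent(y)
    print(np.round(y, 3))     # oscillates 0, 1, 0, 1, ... across [0, 1]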


Journal:
  • CoRR

Volume: abs/1708.02691  Issue: -

Pages: -

Publication date: 2017